A goal of cloud service management is to design a self-adaptable auto-scaler that reacts to workload fluctuations and adjusts the assigned resources. The key problem is how and when to add or remove resources in order to meet agreed service-level agreements (SLAs). Reducing application cost and guaranteeing SLAs are two critical factors in dynamic controller design. In this paper, we compare two dynamic learning strategies based on a fuzzy logic system, which learns and modifies fuzzy scaling rules at runtime. A self-adaptive fuzzy logic controller is combined with two reinforcement learning (RL) approaches: (i) Fuzzy SARSA learning (FSL) and (ii) Fuzzy Q-learning (FQL). As an off-policy approach, Q-learning learns independently of the policy currently followed, whereas SARSA, as an on-policy approach, always incorporates the agent's actual behavior, which leads to faster learning. Both approaches are implemented on the OpenStack cloud platform and compared with respect to their advantages and disadvantages. We demonstrate that both auto-scaling approaches can handle various traffic situations, both sudden and periodic, delivering resources on demand while reducing operating costs and preventing SLA violations. The experimental results show that FSL and FQL achieve acceptable performance in adjusting the number of virtual machines to optimize SLA compliance and response time.
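The off-policy vs. on-policy distinction between the two approaches comes down to which next-step value each update bootstraps from. A minimal sketch of the two tabular update rules follows; the state/action encoding, reward, and hyperparameter values are illustrative assumptions, not the paper's actual fuzzy scaling model.

```python
# Illustrative tabular update rules; ALPHA (learning rate) and GAMMA
# (discount factor) are assumed values, not taken from the paper.
ALPHA, GAMMA = 0.5, 0.9

def q_learning_update(Q, s, a, r, s_next):
    """Off-policy: bootstrap from the greedy (max-value) action in s_next,
    regardless of which action the behavior policy actually takes."""
    best_next = max(Q[s_next].values())
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next):
    """On-policy: bootstrap from a_next, the action the agent actually
    takes in s_next under its current (e.g. exploratory) policy."""
    Q[s][a] += ALPHA * (r + GAMMA * Q[s_next][a_next] - Q[s][a])

# Hypothetical scaling example: states are workload levels, actions
# add or remove a virtual machine.
Q = {0: {'add': 0.0, 'remove': 1.0},
     1: {'add': 2.0, 'remove': 0.0}}
q_learning_update(Q, s=0, a='add', r=1.0, s_next=1)      # uses max = 2.0
```

Because SARSA's target reflects the exploratory action actually chosen, the two rules diverge exactly when the behavior policy is not greedy, which is why the controllers can learn different scaling rules from the same traffic.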